    GMRES-Accelerated ADMM for Quadratic Objectives

    We consider the sequence acceleration problem for the alternating direction method of multipliers (ADMM) applied to a class of equality-constrained problems with strongly convex quadratic objectives, which frequently arise as the Newton subproblem of interior-point methods. Within this context, the ADMM update equations are linear, the iterates are confined to a Krylov subspace, and the Generalized Minimum RESidual (GMRES) algorithm is optimal in its ability to accelerate convergence. The basic ADMM method solves a κ-conditioned problem in O(√κ) iterations. We give theoretical justification and numerical evidence that the GMRES-accelerated variant consistently solves the same problem in O(κ^{1/4}) iterations, an order-of-magnitude reduction, despite a worst-case bound of O(√κ) iterations. The method is shown to be competitive against standard preconditioned Krylov subspace methods for saddle-point problems. The method is embedded within SeDuMi, a popular open-source solver for conic optimization written in MATLAB, and used to solve many large-scale semidefinite programs with error that decreases like O(1/k²), instead of O(1/k), where k is the iteration index.
    Comment: 31 pages, 7 figures. Accepted for publication in SIAM Journal on Optimization (SIOPT)
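    The idea behind the abstract can be illustrated on a toy problem: when an iteration is linear, x_{k+1} = M x_k + q, its fixed point solves the linear system (I − M) x = q, so a Krylov method like GMRES can be run on that system instead of the plain iteration. A minimal sketch with stand-in data (M and q here are arbitrary, not derived from any ADMM splitting or from the paper's code):

    ```python
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    # Stand-in linear iteration x_{k+1} = M x_k + q with ||M|| < 1,
    # so the plain fixed-point iteration converges.
    rng = np.random.default_rng(0)
    n = 50
    B = rng.standard_normal((n, n))
    M = 0.9 * B / np.linalg.norm(B, 2)   # rescale to a contraction
    q = rng.standard_normal(n)

    # Plain fixed-point iteration (analogue of basic ADMM on a linear update)
    x_fp = np.zeros(n)
    for _ in range(300):
        x_fp = M @ x_fp + q

    # GMRES applied to (I - M) x = q, using only matrix-vector products
    I_minus_M = LinearOperator((n, n), matvec=lambda v: v - M @ v)
    x_gm, info = gmres(I_minus_M, q)

    # Reference solution by a dense direct solve, for comparison only
    x_true = np.linalg.solve(np.eye(n) - M, q)
    ```

    Both routes reach the same fixed point; GMRES typically needs far fewer matrix-vector products than the plain iteration, which is the mechanism the abstract exploits when the ADMM updates are linear.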

    Parameterized Complexity of Chordal Conversion for Sparse Semidefinite Programs with Small Treewidth

    If a sparse semidefinite program (SDP), specified over n×n matrices and subject to m linear constraints, has an aggregate sparsity graph G with small treewidth, then chordal conversion will frequently allow an interior-point method to solve the SDP in just O(m+n) time per iteration. This is a significant reduction over the minimum Ω(n³) time per iteration for a direct solution, but a definitive theoretical explanation was previously unknown. Contrary to popular belief, the speedup is not guaranteed by a small treewidth in G, as a diagonal SDP would have treewidth zero but can still necessitate up to Ω(n³) time per iteration. Instead, we construct an extended aggregate sparsity graph Ḡ ⊇ G by forcing each constraint matrix A_i to be its own clique in G. We prove that a small treewidth in Ḡ does indeed guarantee that chordal conversion will solve the SDP in O(m+n) time per iteration, to ε-accuracy in at most O(√(m+n) log(1/ε)) iterations. For classical SDPs like the MAX-k-CUT relaxation and the Lovász theta problem, the two sparsity graphs coincide, G = Ḡ, so our results provide a complete characterization of the complexity of chordal conversion, showing that a small treewidth is both necessary and sufficient for O(m+n) time per iteration. Real-world SDPs like the AC optimal power flow relaxation have different graphs G ⊆ Ḡ with similar small treewidths; while chordal conversion is already widely used on a heuristic basis, in this paper we provide the first rigorous guarantee that it solves such SDPs in O(m+n) time per iteration. [Supporting code at https://github.com/ryz-codes/chordalConv/]
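    The distinction between G and Ḡ can be sketched concretely: G connects the positions where some constraint matrix has a nonzero entry, while Ḡ additionally makes all vertices touched by one constraint pairwise adjacent. A toy construction (the function name and the nonzero patterns are illustrative, not from the paper's code), with treewidths estimated by networkx's min-degree heuristic:

    ```python
    import itertools
    import networkx as nx
    from networkx.algorithms import approximation as approx

    def sparsity_graphs(n, patterns):
        """patterns[i]: set of (j, k) off-diagonal nonzero positions of A_i.

        Returns (G, Gbar): the aggregate sparsity graph, and the extended
        graph in which each constraint's support is forced to be a clique.
        """
        G = nx.empty_graph(n)
        Gbar = nx.empty_graph(n)
        for pat in patterns:
            G.add_edges_from(pat)
            verts = sorted({v for edge in pat for v in edge})
            Gbar.add_edges_from(itertools.combinations(verts, 2))
        return G, Gbar

    # Three toy constraints; the last two touch several vertices each,
    # so cliquing their supports enlarges the graph: G is a forest
    # (treewidth 1), but Gbar contains a K4 on {4,5,6,7} (treewidth 3).
    n = 8
    patterns = [{(0, 1)}, {(1, 2), (2, 3)}, {(4, 5), (6, 7)}]
    G, Gbar = sparsity_graphs(n, patterns)

    tw_G, _ = approx.treewidth_min_degree(G)
    tw_Gbar, _ = approx.treewidth_min_degree(Gbar)
    ```

    This mirrors the abstract's point: a small treewidth in G alone (here, 1) does not bound the treewidth of Ḡ, and it is the latter that governs the per-iteration cost of chordal conversion.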